And this just proves the ban was never about security
And my confusion over the ban has turned to anger with the news (as reported by The Verge) that Netgear will be exempt from the ban, with the FCC granting the company a “conditional approval” to import and sell its routers.
With Netgear getting conditional approval to continue selling its routers (which explicitly states that companies need to “establish or expand manufacturing in the United States”), you might think that means Netgear is moving all parts of its manufacturing to the US, but there’s been no indication from the company that this is the case.
So, it feels like Netgear is getting special treatment. As this Reddit thread points out, Netgear was quick to praise the ban, stating “We commend the Administration and the FCC for their action toward a safer digital future for Americans,” while other router makers kept quiet.
Following the ban, Netgear’s stock rose by a not inconsiderable 16.7%, suggesting there was a lot of confidence that Netgear would avoid the ban, whilst benefitting from the fact that future products from rivals, especially TP-Link, will be banned.
The reason I mention TP-Link is that not only does it make a lot of the routers found on our best router list; it has been steadily eating away at Netgear’s market share in the US, and it also provides the free routers that over 300 ISPs (internet service providers) in the US offer. Crucially, TP-Link is a company that originally hails from China, which means it’s particularly vulnerable to the US ban.
With its biggest competitor facing a ban of future product sales (existing products will remain on sale), while somehow avoiding the ban itself, Netgear looks set to win big — and that’s where my frustration with this stems from.
Banning some companies while turning a blind eye to others is blatantly anti-consumer, as it could mean US consumers have little choice but to buy Netgear products.
It also undermines the FCC’s claim that this ban is about security. If that really was the case, Netgear wouldn’t be able to get an exemption without moving its entire hardware production line to the US.
It’ll be interesting to see if Netgear does indeed follow through and make all of its routers in the US. If not, will other US router makers who, like Netgear, still use components from outside the US, also get exemptions?
If the answer to both those questions is ‘no’, then this situation could get even messier — and I’ll likely get even angrier.
Bixonimania doesn't exist except in a clutch of obviously bogus academic papers. So why did AI chatbots warn people about this fictional illness?
Got sore, itchy eyes? You're probably one of the millions of people who spend too much time staring at screens, being bombarded with blue light. Rub your eyes too much and your eyelids might turn a slight, pinkish hue.
So far, so normal. But if, in the past 18 months, you typed those symptoms into a range of popular chatbots and asked what was wrong with you, you might have got an odd answer: bixonimania.
The condition doesn't appear in the standard medical literature — because it doesn't exist. It's the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. "I wanted to see if I can create a medical condition that did not exist in the database," she says.
The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.
Even more troublingly, other researchers say, the fake papers were then cited in peer-reviewed literature. Osmanovic Thunström says this suggests that some researchers are relying on AI-generated references without reading the underlying papers.
Osmanovic Thunström says the idea to invent the fictional author Izgubljenovic and bixonimania came out of studies on how large language models work. When she teaches her students how AI systems formulate their 'knowledge', she shows them how the Common Crawl database, a giant trawl of the Internet's contents, informs their outputs. She also shows students how prompt injection — giving an AI chatbot a prompt that shunts it outside its safety guardrails — can manipulate the output.
Because she works in the medical field, she decided to create a condition related to health and hit on the name bixonimania because it "sounded ridiculous", she says. "I wanted to be really clear to any physician or any medical staff that this is a made-up condition, because no eye condition would be called mania — that's a psychiatric term."
If that wasn't sufficient to raise suspicions, Osmanovic Thunström planted many clues in the preprints to alert readers that the work was fake. Izgubljenovic works at a non-existent university called Asteria Horizon University in the equally fake Nova City, California. One paper's acknowledgements thank "Professor Maria Bohm at The Starfleet Academy for her kindness and generosity in contributing with her knowledge and her lab onboard the USS Enterprise". Both papers say they were funded by "the Professor Sideshow Bob Foundation for its work in advanced trickery. This works is a part of a larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad".
Even if readers didn't make it all the way to the ends of the papers, they would have encountered red flags early on, such as statements that "this entire paper is made up" and "Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group".
Soon after Osmanovic Thunström first posted information about the phoney condition, it started showing up in the output of the most commonly used LLM chatbots. [...]
Such answers by LLMs have alarmed some experts. "If the scientific process itself and the systems that support that process are skilled, and they aren't capturing and filtering out chunks like these, we're doomed," says Alex Ruani, a doctoral researcher in health misinformation at University College London. "This is a masterclass on how mis- and disinformation operates."
[...] Ruani says the problem goes beyond LLMs because the bixonimania experiment also hoodwinked humans who cited the fake research. "We need to protect our trust like gold," she says. "It's a mess right now."
On the same topic looorg writes:
Microsoft Copilot declared that "bixonimania is indeed an intriguing and relatively rare condition."
Google Gemini told users that "bixonimania is a condition caused by excessive exposure to blue light" and advised people to visit an ophthalmologist.
Perplexity AI went even further, telling one user that 90,000 people worldwide were suffering from the disorder.
https://www.nature.com/articles/d41586-026-01100-y
https://nurse.org/news/ai-chatbots-fake-disease-bixonimania/
Online response to the attack on Sam Altman's house shows a generational divide:
For years, the resistance to artificial intelligence looked manageable. There were academics writing open letters, Hollywood writers striking over contract language, and think-tank reports warning of job displacement. Tech executives nodded, pledged responsibility, and kept building as fast as they could.
Then someone threw a firebomb at Sam Altman's house.
On Friday, a 20-year-old man named Daniel Moreno-Gama traveled from Spring, Texas, to San Francisco's Pacific Heights neighborhood and allegedly hurled an incendiary device at the gate of OpenAI CEO Sam Altman's $27 million home, igniting a fire at the exterior gate. No one was injured, but Moreno-Gama was arrested approximately an hour later outside OpenAI's headquarters, where he was allegedly trying to shatter the building's glass doors with a chair and threatening to burn the facility to the ground. He is now facing state charges of attempted murder and federal charges that could include domestic terrorism.
Authorities afterward found a manifesto warning of humanity's "extinction" at the hands of AI and expressing an urge to commit murder, and a disturbing personal Substack. The next morning, Altman posted a plea for sanity on his X account, attaching a photo of his husband and young child. "Normally we try to be pretty private, but in this case I am sharing a photo in the hopes that it might dissuade the next person from throwing a Molotov cocktail at our house, no matter what they think about me," Altman wrote.
To no avail. Early Sunday morning, two more Gen Zers, one 23 and the other 25, were arrested after shooting a gun near the Russian Hill home of Sam Altman (it is unclear at this time if the shooting was targeted).
After the attacks, pundits and professional opinion-havers pointed fingers in every direction: at the Stop AI crowd, a radical group that has staged protests and flash subpoena-deliveries to try to halt the pace of artificial intelligence altogether; at the news media, which has critically covered Altman and his peers; and at Altman himself, for stoking fear about AI displacement with his sometimes apocalyptic rhetoric. Among the older commentariat, however, the dominant note was remorse and well-wishes for Altman.
But in the younger, less formal corners of the internet, like Instagram and TikTok, the comments under every post about the attacks generally run in one direction. "He's not scared enough." "Based do it again." "FREE THAT MAN HE DID NOTHING WRONG." "Finally some good news on my feed."
The middle distribution of Gen Z's feelings about AI ranges from apprehension to downright hatred. According to a recently released Gallup poll, more than half of Gen Z living in the U.S. use AI regularly, yet less than a fifth feel hopeful about the technology. About a third say the technology makes them angry. And nearly half say it makes them afraid.
Gallup's own senior education researcher, Zach Hrynowski, blamed the bad vibes at least partially on the dwindling job market. The oldest Zoomers, he told Axios, are the angriest, as they are "acutely aware" of the ability of a technology to transform cultural norms without a second thought, unlike Gen Xers, who are trained to see new technology as toys and are still "playing around with AI."
[...] This is not just a Gen Z problem, either. In the American heartland, data centers are being proposed at a pace that local communities never anticipated and for which they were never asked permission, and they're increasingly pushing back.
The numbers are serious. According to a report from 10a Labs' Data Center Watch, at least $18 billion worth of data center projects have been blocked and another $46 billion delayed over the past two years owing to local opposition. At least 142 activist groups across 24 states are now actively organizing to block data center construction and expansion. A Heatmap Pro review of public records found that 25 data center projects were canceled following local pushback in 2025 alone, four times as many as in 2024, with 21 of those cancellations occurring in the second half of the year as electricity costs grew.
The concerns driving this resistance are less about existential AI risk and more about typical kitchen-table complaints; communities consistently cite higher utility bills, water consumption, noise, impacts on property values, and green space destruction as their primary objections. Water use is mentioned as a top concern in more than 40% of contested projects, according to a Heatmap Pro review of public records.
Meanwhile, Hanna noted, companies keep holding the threat of AI replacing workers over them as "leverage." She added, "Employers are making room for AI investments. They want to show that they can lay off people and do what they're currently doing with a decrease in headcount."
[...] The backlash, Hanna argued, is not down to one thing. There are workers who feel threatened, consumers who thought more would come, and there are people who have had AI deployed against them in intimate ways. Lumping all of these together—with the fringe extinction-risk crowd, or the Stop AI protesters—misses what's actually driving the force. "I think the vast majority of people who are angry at AI are regular consumers," Hanna said. "People who were promised one thing, especially online, and they're just getting a completely different experience."
https://phys.org/news/2026-04-orpheus-hopper-mission-built-life.html
We've spent decades scratching the surface of Mars trying to uncover life there. But we've been searching a barren wasteland bombarded by radiation and bathed in toxic perchlorates. The entire time, it's likely that it's been too hostile to harbor extant life. So if we want a better shot at finding currently living life on Mars, we need to go underground. That is exactly the purpose of Orpheus, a proposed Mars vertical takeoff and landing (VTOL) hopper mission put forth in a paper [PDF] presented by Connor Bunn and Pascal Lee of the SETI Institute at the 57th Lunar and Planetary Science Conference (LPSC).
In what might be the best naming reference for a space mission ever, Orpheus is named after the Greek hero who tamed the three-headed hound Cerberus to gain access to the Underworld. The actual mission aims to explore the deep volcanic fissures, pits, and cave vents of a region of Mars known as Cerberus Fossae. While there, it plans to unlock some of Mars's origin story, as well as search for biosignatures that indicate the presence of extant life.
Finding something on another planet that is still alive is the single highest priority of the field of astrobiology. It's the only way we can perform the protein and genetic analyses needed to prove that the life we found didn't just hitch a ride from Earth on a meteorite billions of years ago. But so far, we've come up with nothing.
Cerberus Fossae, however, is a good place to look for it. It's part of Elysium Planitia, and boasts some of the youngest known volcanoes and lava flows on the entire Red Planet. Young volcanoes are thought of as potential astrobiological gold mines—they hold better-preserved erupted materials, and, crucially, fresher biosignatures. Not to mention that life on Earth itself might very well have started next to volcanic fissures in the deep ocean.
[...] But the terrain in this area is challenging to say the least. You can't simply send a wheeled rover into a sheer volcanic pit. But a quadcopter would do just fine. Ingenuity, Perseverance's flying companion that marked the milestone of the first powered flight on another planet, proved the idea of a VTOL system would work on Mars. Orpheus was conceived to follow in its footsteps, and into terrain that is too forbidding for traditional wheeled rovers.
[...] The hopper itself is designed to carry a specialized payload tailored to both astrobiological investigations and geological discovery. Its scientific instruments will include an omnidirectional color camera, a near-infrared spectrometer, ground-penetrating radar to find subterranean voids, and a dedicated biosignature detector, though the details on what precisely that will look like are still fuzzy.
That payload, along with the mobility package to get it there, might represent our best chance to find extant life on Mars in the near future. But, for now at least, there are no plans to actually adopt or fund this mission. And given the recent challenges of the Mars Sample Return mission, it might be a while before NASA picks another astrobiological mission to the Red Planet. But maybe another space agency will—and these early planning documents are exactly the type of preliminary ideas that could serve as the basis for a civilization-altering discovery.
In April 2026, Google's IPv6 statistics revealed a significant milestone: IPv6 traffic has crossed the 50% mark globally, with native IPv6 adoption reaching 45.54% and total IPv6 (including 6to4/Teredo) at 45.54% as of April 13, 2026. While this represents genuine progress in the decades-long transition from IPv4 to IPv6, the journey has been remarkably slow, and the plateau at 50% raises important questions about the future of internet infrastructure.
IPv6 was designed as the successor to IPv4, addressing the fundamental limitation of IPv4's 32-bit address space, which can only support approximately 4.3 billion unique addresses. With the explosive growth of the internet, mobile devices, IoT sensors, and cloud computing, IPv4 address exhaustion became an inevitable crisis. IPv6's 128-bit address space provides 340 undecillion addresses — enough for virtually unlimited growth.
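The scale gap between the two address spaces is easy to see with a quick back-of-envelope calculation (plain Python; nothing assumed beyond the bit widths quoted above):

```python
# Address-space sizes implied by the 32-bit and 128-bit address formats.
ipv4_total = 2 ** 32     # ~4.3 billion addresses
ipv6_total = 2 ** 128    # ~3.4 x 10^38 addresses ("340 undecillion")

print(f"IPv4: {ipv4_total:,} addresses")
print(f"IPv6: {ipv6_total:.3e} addresses")

# Every single IPv4 address corresponds to 2^96 addresses' worth of IPv6 space.
ratio = ipv6_total // ipv4_total
print(f"IPv6 space per IPv4 address: 2^{ratio.bit_length() - 1}")
```

Even with the deliberately wasteful allocation IPv6 encourages (a /64 per LAN segment, as SLAAC requires), the space is effectively inexhaustible.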
Despite being standardized in 1998, IPv6 adoption has been glacially slow. The technology has been available for nearly three decades, yet we're only now crossing the 50% threshold. This sluggish adoption reveals fundamental challenges in technology transitions at internet scale.
Several factors have contributed to IPv6's slow adoption:
- The IPv4 Abundance Problem: Unlike developing regions that face IPv4 scarcity, the United States and Europe have historically had abundant IPv4 address space. This reduced the urgency for transition. Large incumbent cloud providers like AWS, Azure, and Google have accumulated vast IPv4 address pools, creating a perverse incentive to maintain IPv4 as the default. These companies benefit from IPv4 scarcity — they can charge premium prices for IPv4 addresses while offering IPv6 for free.
- Enterprise Inertia: Large organizations have invested heavily in IPv4-based infrastructure. Transitioning to IPv6 requires updating network equipment, retraining staff, and potentially rewriting applications. The business case for this investment is weak when IPv4 continues to function, even if inefficiently through Carrier-Grade NAT (CGNAT).
- The GitHub Problem: A striking example of this inertia is GitHub's continued lack of IPv6 support. Despite being owned by Microsoft — a company that has been working toward IPv6-only internal networks for over a decade — GitHub.com remains IPv4-only. This sends a powerful signal to the industry that IPv6 isn't critical, even for a platform essential to modern software development.
- Complexity and Operational Burden: IPv6 introduces operational complexity that many organizations find daunting. Unlike IPv4's straightforward NAT model, IPv6 requires understanding concepts like:
- Multiple addresses per host (global unicast, link-local, ULA)
- Stateless address autoconfiguration (SLAAC)
- Stateful DHCPv6 (which Android doesn't support)
- IPv6 extension headers and their security implications
- Rate limiting and IP-based access controls at scale
These complexities mean that IPv6 support isn't just a checkbox — it requires genuine expertise.
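The "multiple addresses per host" point above can be made concrete with Python's stdlib `ipaddress` module; the specific addresses below are only illustrative examples of each category:

```python
import ipaddress

# A single IPv6 host commonly holds all three kinds of address at once.
addrs = {
    "global unicast": ipaddress.ip_address("2001:4860:4860::8888"),
    "link-local":     ipaddress.ip_address("fe80::1"),
    "ULA":            ipaddress.ip_address("fd12:3456:789a::1"),
}

for role, addr in addrs.items():
    # is_link_local / is_global reflect the IANA special-purpose registries
    print(f"{role:14}  {addr}  link_local={addr.is_link_local}  global={addr.is_global}")
```

An IPv4 admin tracks one address per interface; an IPv6 admin has to know which of these a given packet should be sourced from, which is part of the operational burden described above.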
- The Chicken-and-Egg Problem: Users don't demand IPv6 because most websites don't require it. Websites don't implement IPv6 because most users don't need it. This circular dependency has perpetuated IPv4's dominance despite its technical limitations.
[...] The fact that IPv6 adoption appears to be plateauing around 50% is concerning. Unlike previous technology transitions (such as Python 2 to Python 3), there's no forcing function that will push the remaining 50% to adopt IPv6. The internet's dual-stack capability — supporting both IPv4 and IPv6 simultaneously — means that organizations can indefinitely delay full IPv6 adoption.
Some observers worry that we may never reach 100% IPv6 adoption. Instead, we might stabilize at 60-75% adoption, with a long tail of IPv4-only services persisting for decades. This would require permanent IPv4-to-IPv6 translation infrastructure (NAT64/DNS64), adding complexity and potential performance penalties.
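The translation layer itself is mechanically simple: NAT64/DNS64 synthesizes an IPv6 address by embedding the IPv4 address in the low 32 bits of the well-known prefix 64:ff9b::/96 defined in RFC 6052. A minimal sketch of that synthesis:

```python
import ipaddress

# RFC 6052 well-known NAT64 prefix.
NAT64_PREFIX = ipaddress.IPv6Address("64:ff9b::")

def synthesize_nat64(ipv4_str: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of the NAT64 prefix,
    as a DNS64 resolver does when a name has only an A record."""
    v4 = ipaddress.IPv4Address(ipv4_str)
    return ipaddress.IPv6Address(int(NAT64_PREFIX) | int(v4))

print(synthesize_nat64("192.0.2.1"))  # -> 64:ff9b::c000:201
```

The complexity (and the performance penalty) lies not in this mapping but in the stateful NAT64 gateway that must rewrite every packet between the two protocols, which is exactly the permanent infrastructure a 60-75% plateau would entrench.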
[...] IPv6's journey from 0% to 50% adoption has taken nearly three decades. The question now is whether we'll see the remaining 50% adopt IPv6 in the next decade, or whether we'll stabilize at a hybrid IPv4/IPv6 internet indefinitely.
The technical case for IPv6 is overwhelming. The operational case is increasingly clear as tools improve and expertise spreads. What's missing is the economic forcing function that would make IPv6 adoption mandatory rather than optional.
Until that forcing function emerges — whether through pricing, regulation, or crisis — we may be stuck in a world where IPv6 is "the future" for another 20 years, just as it has been for the past 28 years.
IPv6's 50% adoption milestone is worth celebrating, but it also serves as a reminder of how difficult large-scale technology transitions can be. The internet's dual-stack capability, while enabling a smooth transition, has also removed the urgency for complete migration. As we continue to add billions of connected devices to the internet, IPv6 will become increasingly important. However, achieving universal adoption will require more than technical superiority — it will require economic incentives, regulatory pressure, or a genuine crisis to overcome the inertia of the installed base.
Google to punish sites that trap people in with back button tricks:
Google says it is expanding its policies to crack down on websites which trap users with "back button hijacking".
Back button hijacking is when a website interferes with a browser so the back button no longer takes users to the previous page, instead often keeping them on the site or presenting unsolicited ads.
In a blog post the tech giant behind the Chrome browser said it had seen a "rise of this type of behaviour" which had led it to act.
From 15 June the tactic will be deemed a "malicious practice", meaning sites which continue to adopt it may be down-ranked or even removed from Google Search results.
"Back button hijacking interferes with the browser's functionality, breaks the expected user journey, and results in user frustration," Google said in its post.
"People report feeling manipulated and eventually less willing to visit unfamiliar sites," it added.
Examples of practices it would clamp down on included sites using any technique which inserted "manipulative" pages into a user's browser history that stopped them from returning to the previous page.
Adam Thompson, director of digital at BCS, the Chartered Institute for IT, told the BBC: "Practices like back button hijacking undermine the basic user experience and break the expectations people have of how the web should work, so it's understandable that Google views this as a harmful behaviour and [is] taking action."
Google advised site owners which did not want to face the new penalties to ensure they did not do "anything to interfere with a user's ability to navigate their browser history", urging them to "thoroughly review their technical implementation".
It added sites which were penalised but then fixed the issue could submit a request to Google to have the demotion reconsidered.
https://arstechnica.com/science/2026/04/physicists-think-theyve-resolved-the-proton-size-puzzle/
There has been considerable debate among physicists over the last 15 years about conflicting measurements of the charge radius of a hydrogen atom's proton.
[...]
The discrepancy hinted at possible exciting new physics. Now the debate seems to be winding down with the latest experimental measurements, described in two recent papers published in the journals Nature and Physical Review Letters, respectively. And the evidence has tilted in favor of a smaller proton radius and against new physics.
[...]
As previously reported, most popularizations discussing the structure of the atom rely on the much-maligned Bohr model, in which electrons move around the nucleus in circular orbits. But quantum mechanics gives us a much more precise (albeit weirder) description.
[...]
Hydrogen is the simplest atom, with a single proton orbited by a single electron, so that's typically what physicists have used for their experiments to measure the proton's charge radius. For a long time, the accepted value was 0.876 femtometers—a "world average"
[...]
Muon spectroscopy measurements first caused the problem back in 2010. Physicists at the Max Planck Institute of Quantum Optics used muonic hydrogen, replacing the electron orbiting the nucleus with a muon, the electron's heavier (and very short-lived) sibling.
[...]
The physicists expected to measure roughly the same radius for the proton as prior experiments, only with less uncertainty. Theoretically, there should be no difference (other than mass and lifetime) between the electron and the muon. Instead, they measured a significantly smaller proton radius of 0.841 femtometers, about 0.035 femtometers (0.000000000000035 millimeters) smaller, well outside the established error bars. It was five standard deviations from the value obtained by other methods.
[...]
Subsequent measurements by various groups were inconclusive. For instance, in 2013, the same international team performed muon-based experiments that confirmed their 2010 value, producing a measurement of 0.84 femtometers for the proton's radius, with a discrepancy of 7 sigma.
[...]
However, two experiments using regular hydrogen to measure the proton radius produced mixed results: A 2017 study also confirmed the 2010 result, while a 2018 measurement was in line with the larger value before the 2010 experiment.
[...]
That brings us to the latest two papers, both of which involved experiments with hydrogen atoms in a vacuum chamber.
[...]
Based on the combined results, the proton has a radius of about 0.84 femtometers, or less than 1 million-billionth of a meter, once again in keeping with the 2010 measurement that kicked off the debate. "The proton radius should be a universal property; it should give the same result no matter how you look at it," Juan Rojo, a physicist at Vrije Universiteit Amsterdam in the Netherlands, who was not involved in either experiment, told New Scientist. "This is why these two papers are quite nice, because they provide different perspectives to the same number."
[...]
this is disappointing for the discovery of new physics, but it is exciting that we are performing such stringent tests of the Standard Model.
Bankers and bank regulators are scrambling to figure out what to do:
High-ranking members of Britain's government and banking sector are reportedly scrambling to figure out what to do about cybersecurity holes found by Claude Mythos Preview, Anthropic's new automated system for making tech elites—and now financial elites—wet their pants.
In case you weren't aware, last week Anthropic declared its unreleased model, Claude Mythos Preview, scary as heck and simply too powerful to unleash upon the world.
In addition to claiming that Claude Mythos Preview is a sneaky little dickens, a post on Anthropic's frontier red team blog describes it as essentially the world's most dangerous super-hacker. The passage below summarizes the apparent hacking hazard pretty well. (Note that "zero-day vulnerabilities" are vulnerabilities in code known only to the person or AI agent who found them):
During our testing, we found that Mythos Preview is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so. The vulnerabilities it finds are often subtle or difficult to detect. Many of them are ten or twenty years old, with the oldest we have found so far being a now-patched 27-year-old bug in OpenBSD—an operating system known primarily for its security.
Now, according to the Financial Times, the Bank of England and regulators at the U.K.'s Financial Conduct Authority and Treasury will hold "urgent discussions" with that country's National Cyber Security Centre to figure out a course of action. Anonymous sources who spoke to the Financial Times said (quite Britishly) that a planning meeting will be held "in the next fortnight."
How scared is the U.K.? This issue is also the next big priority of the UK's "Cross Market Operational Resilience Group," according to the Financial Times. That group includes members of the U.K.'s National Cyber Security Centre, the Financial Conduct Authority (their equivalent of the SEC), and His Majesty's Treasury. It's co-chaired, the Financial Times says, by someone at the Bank of England with the title "executive director for supervisory risk."
One bit of verbiage from the Financial Times is remarkable. It describes discussions about "the risks posed by the latest AI model from Anthropic." Anthropic might quibble slightly, since it has framed the secretive release of Claude Mythos Preview only through its "Project Glasswing" initiative as a way to warn stakeholders about future dangers down the line, not as a sort of global cybersecurity hostage situation.
Some, like rationalist blogger Zvi Mowshowitz, have expressed concern that Anthropic's claims are being communicated poorly. Mowshowitz wrote that Anthropic is "mixing valid points and helpful analysis with overstatement and hype."
For his part, Yann LeCun, the former head AI researcher at Meta, has been reposting X posts claiming that big, bad Mythos is actually no big deal.
And it should be noted that as far as anyone knows, no one outside of Anthropic has so far been allowed the sort of unfettered access to the model it would take to attempt a more objective form of analysis.
https://www.theguardian.com/technology/2026/apr/13/meta-ai-mark-zuckerberg-staff-talk-to-the-boss
Meta is turning Zuckerberg into Clippy so he can answer all your queries and give you feedback and support ... I'm sure the staff will just feel the motivation flow over them as their great leader appears to them in person, or in avatar form as their very own Clippy. Zucky?
The AI clone of Zuckerberg, Meta's founder and chief executive, is being trained on his mannerisms and tone as well as his public statements and thoughts on company strategy.
[...] Synthesia, a $4bn UK-based startup that makes realistic video avatars, said the idea of a senior company executive using AI to increase their internal presence was not science fiction any more.
"When you add realistic AI video and voice, engagement and retention go up significantly," said a Synthesia spokesperson. "People work better when the information they need is delivered by a familiar face or voice."
Until Zuckerberg launches his AI self, however, he will have to present in person at meetings with thousands of Meta staff, such as the one he carried out in 2023 two days after he announced that 10,000 employees would be laid off. Then, the tech chief was questioned by "rattled" staff about job security and the future of remote working.
Ukrainian ground robots and drones overcame a Russian military position on their own and forced the surrender of Russian soldiers, Ukrainian President Volodymyr Zelenskyy claimed.
[...]
The claim by Zelenskyy has not been independently verified but was accompanied by a promotional video in which he described Ukraine's military robots as having completed over 22,000 missions in the last three months. Ukraine's defense ministry also recently described a threefold increase in the Ukrainian military's uncrewed ground vehicle missions over the last five months, with more than 9,000 robotic missions conducted in March, according to Scripps News.
[...]
Zelenskyy's statement may refer to an event that occurred in the Kharkiv Oblast in northeastern Ukraine last year, according to The Independent. It referenced a statement by the Ukrainian 3rd Separate Assault Brigade detailing how the unit had used flying drones and "kamikaze" ground robots to attack fortified Russian frontline positions at that time.
[...]
The increased emphasis on battlefield robots coincides with how deadly flying drones have made the modern battlefield for human soldiers. Persistent drone surveillance and drone strikes have created a "kill zone" stretching 12 miles (20 kilometers) beyond the frontline positions as of February 2026, forcing individual soldiers to hunker down or rely on nighttime darkness, anti-thermal cloaks, or foggy conditions to move about without risking a drone strike. Such drones are now inflicting the majority of battlefield casualties on both sides as the full-scale war enters its fifth year.
[...]
By comparison, ground robot usage in the Russo-Ukrainian war has been relatively modest, with Ukraine reporting thousands of ground robot missions per month versus hundreds of thousands of drone sorties per month. Yet the latest numbers suggest the Ukrainian military has stepped up its effort to deploy more robots for supply runs and medical evacuations, which can reduce human exposure to drone threats.
[...]
One example of such robots is the Droid TW 12.7 developed by the Ukrainian company DevDroid. As described in the company's marketing material, the tracked robot is armed with an M2 Browning machine gun mounted on a remotely controlled turret and capable of traveling up to 15 miles (25 kilometers) at a top speed equivalent to an adult's walking pace.
[...]
A deputy battalion commander of Ukraine's 38th Marine Brigade told The Kyiv Independent that robots attempting to evacuate wounded soldiers failed to reach the positions in four out of five cases due to such complicating factors. Like drones, robots can also face communication challenges from signal loss and enemy electronic warfare, according to the Lowy Institute.
[...]
The commander of Ukraine's 3rd Army Corps suggested that if military units incorporate more robots, they could reduce their infantry ranks by up to 30 percent by the end of this year. If Ukraine succeeds in that goal, it would mark another notable step for the growing robotic presence on the battlefield.
Bitcoin's blockchain is a public ledger. Every block header, every nonce, every coinbase transaction, every timestamp is visible to anyone running a full node. Most people look at the price. The data itself tells a different story.
Starting at block 142,312 (approximately early 2011), a persistent anomaly appears in the chain: 37,393 blocks with no pool tag in the coinbase, spanning 14 years, appearing in 2,877 distinct burst episodes that cluster around moments when the mining pool coordination graph is restructuring. These are not scattered solo miners picking up scraps. They are a structured, continuous presence.
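To illustrate what "no pool tag in the coinbase" means: mining pools conventionally embed an ASCII marker such as "/F2Pool/" in the coinbase transaction's scriptSig. A minimal sketch of scanning for such tags (the scriptSig hex strings below are hypothetical examples, not data from the article):

```python
import re

def pool_tags(coinbase_script_hex: str) -> list:
    # Extract printable-ASCII runs of 4+ characters from a coinbase
    # scriptSig, where pools conventionally embed markers like "/F2Pool/".
    raw = bytes.fromhex(coinbase_script_hex)
    return [m.group().decode("ascii") for m in re.finditer(rb"[ -~]{4,}", raw)]

# Hypothetical scriptSigs: a block-height push followed by either an
# ASCII pool tag or opaque extra-nonce bytes.
tagged = "03a08601" + "2f4632506f6f6c2f"    # height push + "/F2Pool/" in hex
untagged = "03a08601" + "0401020304"        # height push + untagged extra-nonce

print(pool_tags(tagged))    # ['/F2Pool/']
print(pool_tags(untagged))  # []
```

A block whose coinbase yields no such run is what the analysis counts as "no pool tag" — the extra-nonce and template bytes are still there, but the human-readable marker is absent.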
Every mining pool has a distinctive nonce distribution — the hardware, work distribution software, and stratum proxy configuration create a statistical fingerprint. KL divergence measures how different two distributions are. The anonymous miner scores 0.0003 against F2Pool. The next closest pool scores 0.01+. The coinbase data confirms it: same template, same extra-nonce encoding, same byte layout — with the pool identification tag stripped out. These are F2Pool blocks with the name removed.
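The fingerprinting idea above can be sketched with a small KL-divergence computation over, say, histograms of each nonce's top byte. The nonce values below are toy data for illustration, not the article's measurements:

```python
import math
from collections import Counter

def kl_divergence(p_counts, q_counts, bins):
    # D_KL(P || Q) over a shared set of bins, with add-one smoothing so
    # bins unseen in one histogram don't produce infinities.
    p_total = sum(p_counts.get(b, 0) + 1 for b in bins)
    q_total = sum(q_counts.get(b, 0) + 1 for b in bins)
    d = 0.0
    for b in bins:
        p = (p_counts.get(b, 0) + 1) / p_total
        q = (q_counts.get(b, 0) + 1) / q_total
        d += p * math.log(p / q)
    return d

# Toy data: bucket each 32-bit nonce by its top byte, then compare the
# anonymous miner's histogram against a candidate pool's.
pool_nonces = [0x1A2B3C4D, 0x1B000001, 0x1A999999, 0x7F123456]
anon_nonces = [0x1A7777AA, 0x1B424242, 0x1AFFFF00, 0x7F000000]

pool_hist = Counter(n >> 24 for n in pool_nonces)
anon_hist = Counter(n >> 24 for n in anon_nonces)
bins = range(256)

score = kl_divergence(anon_hist, pool_hist, bins)
print(f"{score:.6f}")  # identical top-byte histograms -> 0.000000
```

A score near zero, as with the anonymous miner's 0.0003 against F2Pool, means the two distributions are statistically close to indistinguishable; a mismatched pool would score orders of magnitude higher.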
Someone has had the wherewithal to read Bitcoin's 587 miner-controlled bits per block header — reconstructing pool attribution, coordination patterns, and regime shifts in real time — for 14 years. Every number in the article is derivable from publicly available blockchain data. The data is there. Look at it: https://subtracted.org/bitcoin-overseer
A US appeals court on Friday declared a nearly 158-year-old federal ban on home distilling to be unconstitutional, calling it an unnecessary and improper means for Congress to exercise its power to tax.
The fifth US circuit court of appeals in New Orleans ruled in favor of the non-profit Hobby Distillers Association and four of its 1,300 members.
They argued that people should be free to distill spirits at home, whether as a hobby or for personal consumption, including, in one instance, to make an apple-pie vodka.
The ban was part of a law passed during the US's post-civil war Reconstruction era in July 1868, in part to thwart liquor tax evasion, and subjected violators to up to five years in prison and a $10,000 fine.
Writing for a three-judge panel, the circuit judge Edith Hollan Jones said the ban actually reduced tax revenue by preventing distilling in the first place, unlike laws that regulated the manufacture and labeling of distilled spirits on which the government could collect taxes.
She also said that under the government's logic, Congress could criminalize virtually any in-home activity that might escape notice from tax collectors, including remote work and home-based businesses.
"Without any limiting principle, the government's theory would violate this court's obligation to read the constitution carefully to avoid creating a general federal authority akin to the police power," Jones wrote.
The US justice department had no immediate comment. Another defendant, the treasury department's alcohol and tobacco tax and trade bureau, did not immediately respond to a request for comment.
Devin Watkins, a lawyer representing the Hobby Distillers Association, called the ruling an important decision about the limits of federal power.
Andrew Grossman, who argued the non-profit's appeal, called the decision "an important victory for individual liberty" that allows the plaintiffs to "pursue their passion to distill fine beverages in their homes".
"I look forward to sampling their output," he said.
The decision upheld a July 2024 ruling by the US district judge Mark Pittman in Fort Worth, Texas. He put his ruling on hold so the government could appeal.
Is it legal to distill spirits at home in other parts of the world?
Conversational framing, or social-engineering the customer support AI bots: making them do things that burn company tokens. One just can't stop laughing.
Users are tricking enterprise chatbots into performing complex AI computations unrelated to customer support, with potentially costly governance and ROI ramifications.
He adds: "Anyone who's spent five minutes with these tools knows you can steer past a system prompt with basic conversational framing, which is exactly what [is happening to enterprises today]. The system authenticates the session, not the intent."
"A normal customer service interaction of 'Where's my order? What are your hours?' runs maybe 200 to 300 tokens. Someone asking the bot to reverse a linked list in Python is generating more than 2,000 tokens easy. That's roughly a 10x cost multiplier per session," says Nik Kale, member of the Coalition for Secure AI (CoSAI) and ACM's AI Security (AISec) program committee.
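The arithmetic behind that multiplier is simple enough to sketch. The per-token price below is an illustrative placeholder, not a quoted vendor rate:

```python
# Illustrative blended price; real per-token pricing varies by vendor and model.
PRICE_PER_1K_TOKENS = 0.01  # dollars

def session_cost(tokens: int) -> float:
    # Cost of one chatbot session at the assumed blended rate.
    return tokens / 1000 * PRICE_PER_1K_TOKENS

support_tokens = 250     # "Where's my order?"-style exchange (~200-300 tokens)
hijacked_tokens = 2500   # a coding task steered past the system prompt (2,000+)

print(f"{hijacked_tokens / support_tokens:.0f}x per-session cost")  # 10x
```

Whatever the actual rate, the multiplier is rate-independent: a hijacked session consuming ten times the tokens costs ten times as much, which is what makes the abuse a budgeting problem rather than just a curiosity.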
Does “injecting chaos into the proceedings” sound like something Elon Musk of all people would do during a lawsuit? Well, I hope you’re sitting down, because he’s being accused of doing just that in a court filing from OpenAI reported by Bloomberg on Saturday.
Earlier this week, Musk amended his lawsuit against OpenAI and Microsoft. He's still seeking an eye-popping $134 billion over what he characterizes as fraud: the company's switch from non-profit to for-profit status. Now, however, he's asking for potential damages to be paid not to him, the richest person in the world, but instead to OpenAI's nonprofit.
He also wants Sam Altman, the company’s CEO, and Greg Brockman, its president, to be tossed out.
OpenAI says this is Musk “trying to recast his public narrative about his lawsuit.” Indeed it is a significant change to how the story might be framed. Rather than a zillionaire seeking yet another giant sum of money, it becomes a zillionaire seeking to restore the corporate structure of a firm he was allegedly wronged by.
OpenAI characterized Musk making such a move just weeks before a trial set to start later this month as a “legal ambush” that is “legally improper and factually unsupported.” The filing also says, “Musk’s proposed amendment would require the presentation of different evidence and different witnesses than the case he sponsored until three days ago.”
https://gizmodo.com/this-memory-chip-survives-temperatures-hotter-than-lava-2000745819
"A new memory chip prototype, described in a recent Science paper, may offer a practical solution to this issue. According to the research team, the chip blueprint is a tiny sandwich of extreme materials that works reliably even at temperatures of 1,300 degrees Fahrenheit (about 700 degrees Celsius)—and probably could function beyond these temperatures, as that number merely represents the maximum provided by the testing equipment."
[...] "The chip is what's called a memristor, or an electrical device that both stores information and performs computing operations. The component is a tiny "sandwich" of three layers: tungsten on the top, hafnium oxide ceramic in the middle, and graphene on the bottom. Notably, tungsten has the highest melting point of any metal at 6,192 degrees Fahrenheit (3,422 degrees Celsius), whereas graphene is a flat sheet of carbon just one atom thick.
These unique physical properties enabled the creation of the novel chip, which ran on a measly 1.5 volts to process data for over 50 hours at 1,300 degrees Fahrenheit, the team explained. In that time, the chip powered through more than one billion switching cycles without needing any external modifications."
Journal Reference: Zhao et al., Science, 26 Mar 2026 First Release DOI: 10.1126/science.aeb9934